Time-partitioning simulation models for calculation on parallel computers
A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used because all processors operate simultaneously, each updating the solution grid at a different time point. The technique is limited neither by the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved on a two-processor Cray X-MP/24 computer.
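The reported figures follow the standard definitions of parallel speedup and efficiency; a minimal sketch (the function name is ours, not the report's):

```python
def speedup_efficiency(t_serial, t_parallel, processors):
    # classic parallel-performance measures:
    # speedup S = T_serial / T_parallel; efficiency E = S / p on p processors
    s = t_serial / t_parallel
    return s, s / processors

# The abstract reports a speedup of 1.77 on the two-processor Cray X-MP/24;
# with normalised times (T_serial = 1.77, T_parallel = 1) that corresponds
# to an efficiency of 1.77 / 2 = 0.885.
s, e = speedup_efficiency(1.77, 1.0, 2)
print(s, e)  # 1.77 0.885
```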
Turbomachinery CFD on parallel computers
The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. In particular, the need for simulations with higher fidelity and faster turnaround time is highlighted, and it is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed, and current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts focus on implementing the average-passage turbomachinery model on MIMD, distributed-memory parallel computers. Performance results are given for inviscid, single-blade-row and viscous, multistage applications on several parallel computers, including networked workstations.
Parallel solution of high-order numerical schemes for solving incompressible flows
A new parallel numerical scheme for solving incompressible steady-state flows is presented. The algorithm uses a finite-difference approach to solve the Navier-Stokes equations. The scheme is scalable and expandable: it may be run on as few as two processors or on as many as are available, and a solution grid of any size may be used. Four processors of the NASA LeRC Hypercluster, configured in a distributed-memory, hypercube-like architecture, were used to solve for steady-state flow in a driven square cavity. On a 50-by-50 finite-difference grid, an efficiency of 74 percent (a speedup of 2.96) was obtained.
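A hedged sketch (not the paper's actual solver) of the kind of grid partitioning that makes such a scheme run on "two processors or as many as are available": each worker owns a strip of interior rows, and a Jacobi relaxation step reads only neighbouring rows, so the strips can be updated independently. The function names and the use of Jacobi smoothing are our illustrative assumptions.

```python
import numpy as np

def jacobi_step(u):
    # serial reference: one Jacobi sweep over the interior of the grid
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def partitioned_jacobi(u, workers, steps):
    # split the interior rows into one strip per worker; any grid size and
    # any worker count works, mirroring the scalability claim above
    strips = np.array_split(np.arange(1, u.shape[0] - 1), workers)
    for _ in range(steps):
        v = u.copy()
        for r in strips:  # in a real code, each strip runs on its own processor
            lo, hi = r[0], r[-1] + 1
            v[lo:hi, 1:-1] = 0.25 * (u[lo-1:hi-1, 1:-1] + u[lo+1:hi+1, 1:-1] +
                                     u[lo:hi, :-2] + u[lo:hi, 2:])
        u = v
    return u

# 50-by-50 grid with a "driven lid" boundary, echoing the cavity test problem
u0 = np.zeros((50, 50))
u0[0, :] = 1.0
serial = u0.copy()
for _ in range(20):
    serial = jacobi_step(serial)
print(np.allclose(partitioned_jacobi(u0, 4, 20), serial))  # True
```

Because every strip reads from the previous iterate and writes to a fresh array, the 4-worker decomposition reproduces the serial sweep exactly.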
Projected health effects of realistic dietary changes to address freshwater constraints in India : a modelling study
Acknowledgements: This study forms part of the Sustainable and Healthy Diets in India project supported by the Wellcome Trust's Our Planet, Our Health programme (grant number 103932). LA's PhD is funded by the Leverhulme Centre for Integrative Research on Agriculture and Health. SA is supported by a Wellcome Trust Capacity Strengthening Strategic Award-Extension phase (grant number WT084754/Z/08/A). We would like to thank Zaid Chalabi (London School of Hygiene & Tropical Medicine) for providing valuable guidance on the modelling methods.
A new method for imaging nuclear threats using cosmic ray muons
Muon tomography is a technique that uses cosmic ray muons to generate three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Advantages of this technique are the ability of cosmic rays to penetrate significant overburden and the absence of any additional dose delivered to subjects under study beyond the natural cosmic ray flux. Disadvantages include relatively long exposure times, poor position resolution, and the complex algorithms needed for reconstruction. Here we demonstrate a new method for obtaining improved position resolution and statistical precision for objects with spherical symmetry.
Obtaining material identification with cosmic ray radiography
The passage of muons through matter is mostly affected by their Coulomb interactions with electrons and nuclei. The muon interactions with electrons lead to continuous energy loss and stopping of muons, while their scattering off nuclei leads to angular 'diffusion'. By measuring both the number of stopped muons and the angular changes in muon trajectories, we can estimate density and identify materials. Here we demonstrate such material identification using data taken at Los Alamos with the Mini Muon Tracker. Comment: 10 pages, 9 figures; accepted to AIP Advances.
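The angular 'diffusion' signal is conventionally described by the Highland/PDG parameterisation of the RMS multiple-Coulomb-scattering angle; a hedged sketch (function name and the sample radiation lengths are illustrative, taken from standard tables, not from this paper):

```python
import math

def highland_theta0(p_mev, beta, x, x0, z=1):
    """RMS multiple-scattering angle (radians) for a charge-z particle of
    momentum p_mev (MeV/c) crossing path length x in a material with
    radiation length x0 (same units as x). Standard PDG parameterisation."""
    t = x / x0  # thickness in radiation lengths
    return (13.6 / (beta * p_mev)) * z * math.sqrt(t) * (1 + 0.038 * math.log(t))

# Illustrative radiation lengths in cm (standard tabulated values);
# denser, higher-Z materials have smaller X0 and so scatter muons more.
X0_CM = {"water": 36.08, "iron": 1.757, "lead": 0.5612, "uranium": 0.3166}
for material, x0 in X0_CM.items():
    # RMS scattering angle for a 3 GeV/c muon (beta ~ 1) crossing 10 cm
    theta_mrad = 1e3 * highland_theta0(3000.0, 1.0, 10.0, x0)
    print(f"{material}: {theta_mrad:.1f} mrad")
```

The monotonic growth of the scattering width from water to uranium is what makes the angular measurement a material discriminator, complementing the stopped-muon count.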
Gender Differences in Self-Handicapping
Research in the area of self-handicapping has consistently demonstrated a robust yet puzzling gender difference in the use and evaluation of behavioral self-handicaps: women (1) are less likely to use these forms of handicaps, particularly those involving the actual or reported reduction of effort, and (2) evaluate the use of these handicaps by others more negatively than do men. The present research examines several possible explanations for these consistent gender differences and finds that the personal value placed on effort is an important mediator of these effects.
Pseudo-prospective Evaluation of UCERF3-ETAS Forecasts During the 2019 Ridgecrest Sequence
The 2019 Ridgecrest sequence provides the first opportunity to evaluate Uniform California Earthquake Rupture Forecast v.3 with epidemic‐type aftershock sequences (UCERF3‐ETAS) in a pseudoprospective sense. For comparison, we include a version of the model without explicit faults more closely mimicking traditional ETAS models (UCERF3‐NoFaults). We evaluate the forecasts with new metrics developed within the Collaboratory for the Study of Earthquake Predictability (CSEP). The metrics consider synthetic catalogs simulated by the models rather than synoptic probability maps, thereby relaxing the Poisson assumption of previous CSEP tests. Our approach compares statistics from the synthetic catalogs directly against observations, providing a flexible approach that can account for dependencies and uncertainties encoded in the models. We find that, to the first order, both UCERF3‐ETAS and UCERF3‐NoFaults approximately capture the spatiotemporal evolution of the Ridgecrest sequence, adding to the growing body of evidence that ETAS models can be informative forecasting tools. However, we also find that both models mildly overpredict the seismicity rate, on average, aggregated over the evaluation period. More severe testing indicates the overpredictions occur too often for observations to be statistically indistinguishable from the model. Magnitude tests indicate that the models do not include enough variability in forecasted magnitude‐number distributions to match the data. Spatial tests highlight discrepancies between the forecasts and observations, but the greatest differences between the two models appear when aftershocks occur on modeled UCERF3‐ETAS faults. Therefore, any predictability associated with embedding earthquake triggering on the (modeled) fault network may only crystalize during the presumably rare sequences with aftershocks on these faults. Accounting for uncertainty in the model parameters could improve test results during future experiments.
Maximilian J. Werner and Warner Marzocchi received funding from the European Union's Horizon 2020 research and innovation program (Number 821115, RISE: Real‐Time Earthquake Risk Reduction for a Resilient Europe). This research was supported by the Southern California Earthquake Center (SCEC; Contribution Number 10082). SCEC is funded by National Science Foundation (NSF) Cooperative Agreement EAR‐1600087 and the U.S. Geological Survey (USGS) Cooperative Agreement G17AC00047.
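A hedged sketch of the catalog-based testing idea described above: instead of assuming a Poisson rate, the forecast's event-count distribution is taken directly from its synthetic catalogs, and the observation is scored by its quantile within that distribution. The function name and the toy numbers are ours, not CSEP's.

```python
def number_test_quantile(synthetic_counts, observed_count):
    """Fraction of synthetic catalogs whose event count is <= the observation.
    A quantile near 0 flags overprediction (simulations almost always exceed
    the data); a quantile near 1 flags underprediction."""
    n = len(synthetic_counts)
    return sum(1 for c in synthetic_counts if c <= observed_count) / n

# Toy example: a forecast whose simulated catalogs all contain more events
# than were observed would be flagged as overpredicting the rate.
sims = [120, 95, 130, 110, 140, 105, 125, 98, 133, 117]
print(number_test_quantile(sims, 90))   # 0.0 -> every simulation exceeds the data
print(number_test_quantile(sims, 150))  # 1.0 -> every simulation falls short
```

Because the reference distribution comes from the model's own simulations, the test inherits whatever dependencies and uncertainties the model encodes, which is the flexibility the abstract emphasizes.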
A physics-based earthquake simulator replicates seismic hazard statistics across California
Seismic hazard models are important for society, feeding into building codes and hazard mitigation efforts. These models, however, rest on many uncertain assumptions and are difficult to test observationally because of the long recurrence times of large earthquakes. Physics-based earthquake simulators offer a potentially helpful tool, but they face a vast range of fundamental scientific uncertainties. We compare a physics-based earthquake simulator against the latest seismic hazard model for California. Using only uniform parameters in the simulator, we find strikingly good agreement of the long-term shaking hazard with the California model. This ability of a physics-based model to replicate statistically based seismic hazard estimates cross-validates standard methods and provides a new alternative approach that needs fewer inputs and assumptions for estimating hazard.
Precision Metrology Meets Cosmology: Improved Constraints on Ultralight Dark Matter from Atom-Cavity Frequency Comparisons
We conduct frequency comparisons between a state-of-the-art strontium optical lattice clock, a cryogenic crystalline silicon cavity, and a hydrogen maser to set new bounds on the coupling of ultralight dark matter to Standard Model particles and fields in the mass range of eV. The key advantage of this two-part ratio comparison is the differential sensitivity to time variation of both the fine-structure constant and the electron mass, achieving a substantially improved limit on the moduli of ultralight dark matter, particularly at higher masses than typical atomic spectroscopy results probe. Furthermore, we demonstrate an extension of the search range to even higher masses by use of dynamical decoupling techniques. These results highlight the importance of using the best-performing atomic clocks for fundamental physics applications as all-optical timescales are increasingly integrated with, and will eventually supplant, existing microwave timescales. Comment: 11 pages, 10 figures.
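Ultralight dark matter of mass m would modulate a clock/cavity frequency ratio at the Compton frequency f = mc²/h, so the search above amounts to looking for sinusoids in the ratio time series. A minimal sketch of that mass-to-frequency mapping (the function name is illustrative; the constants are the exact SI defined values):

```python
H = 6.62607015e-34    # Planck constant, J s (exact SI value)
EV = 1.602176634e-19  # one electron-volt in joules (exact SI value)

def compton_frequency_hz(mass_ev):
    # f = m c^2 / h, with the particle mass given as its rest energy in eV
    return mass_ev * EV / H

# A 1e-15 eV particle oscillates at ~0.24 Hz, accessible to ratio data
# sampled over hours; higher masses push the signal to frequencies that
# motivate the dynamical-decoupling extension mentioned in the abstract.
print(compton_frequency_hz(1e-15))
```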